In unstructured environments, robots run the risk of unexpected collisions. How well they react to these events is determined by how transparent they are to collisions. Transparency is affected by structural properties as well as sensing and control architectures. In this paper, we propose the collision reflex metric as a way to formally quantify transparency. It is defined as the total impulse transferred during a collision, which determines the collision-mitigation capabilities of a closed-loop robotic system, taking structure, sensing, and control into account. We analyze the effect of motor scaling, stiffness, and configuration on the collision reflex of a system using an analytical model. Physical experiments using the move-until-touch behavior are conducted to compare the collision reflex of direct-drive and quasi-direct-drive actuators and of two robotic hands (Schunk WSG-50 and Dexterous DDHand). For transparent systems, we observe a counter-intuitive trend: the impulse may be lower at higher pre-impact velocities.
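As an illustrative reading of the metric (a sketch only; the exact definition and integration limits are given in the paper), the collision reflex can be written as the impulse accumulated over the contact interval:

$$
J \;=\; \int_{t_0}^{t_f} F(t)\,dt \;=\; m_{\mathrm{eff}}\,\Delta v,
$$

where $F(t)$ is the contact force, $[t_0, t_f]$ the collision interval, $m_{\mathrm{eff}}$ the effective inertia reflected at the contact, and $\Delta v$ the change in contact-point velocity; a lower $J$ indicates a more transparent system.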
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
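As a usage sketch (assuming the release is hosted on the Hugging Face Hub; the hub IDs and the transformers-based loading below are assumptions for illustration, not taken from the paper), generation with a BLOOM checkpoint follows the standard causal-LM API:

```python
# Minimal sketch: load a BLOOM checkpoint and sample a continuation with the
# Hugging Face transformers library. "bigscience/bloom-560m" is assumed to be
# a small released variant; the full 176B model requires multi-GPU or offloaded
# inference and is not practical to load this way on a single machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The BLOOM model was trained on 46 natural languages",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```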
This work presents a formulation of model predictive control (MPC) that adapts the complexity of the model to the task while maintaining feasibility and stability guarantees. Existing MPC implementations often handle computational complexity by shortening the prediction horizon or simplifying the model, both of which can lead to instability. Inspired by related approaches in behavioral economics, motion planning, and biomechanics, our method solves the MPC problem with a simple model of the dynamics and constraints in the regions of the horizon where such a model is feasible, and with the complex model where it is not. The approach interleaves planning and execution to iteratively identify these regions, which can be safely simplified if they satisfy an exact template/anchor relationship. We show that the method does not compromise the stability or feasibility properties of the system, and we measure its performance in simulation experiments of a quadruped executing agile behaviors. We find that this adaptive approach enables more agile motion than fixed-complexity implementations and expands the range of tasks that can be executed.
Automating data collection with mobile robots promises to improve the efficacy of environmental surveys, but it requires the system to autonomously determine how to sample the environment while avoiding obstacles. Existing approaches, such as the Boustrophedon decomposition algorithm, can cover the environment completely at a specified resolution, but in many cases sampling at the resolution of the underlying distribution would produce long paths with an intractable number of measurements. Shortening these paths can yield feasible plans at the cost of distribution estimation accuracy. This work explores the trade-off between distribution accuracy and path length for the Boustrophedon decomposition algorithm. We quantify algorithm performance by computing metrics for the accuracy of the estimated environmental distribution and for path length in Monte Carlo simulations. We highlight when one objective should be prioritized over the other and propose a modification to the algorithm that improves its effectiveness by sampling more uniformly. These results demonstrate how intelligent deployment of the Boustrophedon algorithm can effectively guide autonomous environmental sampling.
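To make the resolution/path-length trade-off concrete, here is a minimal sketch (an illustration, not the authors' implementation) of a lawnmower-style boustrophedon path over an obstacle-free rectangle; halving the spacing roughly doubles the path length and quadruples the number of measurements:

```python
import numpy as np

def boustrophedon_path(width, height, resolution):
    """Generate back-and-forth waypoints covering a rectangle at a given spacing.

    Sketch only: a real Boustrophedon cell decomposition also splits the area
    around obstacles; here the workspace is assumed obstacle-free.
    """
    ys = np.arange(0.0, height + 1e-9, resolution)
    waypoints = []
    for i, y in enumerate(ys):
        xs = np.arange(0.0, width + 1e-9, resolution)
        if i % 2 == 1:          # reverse every other pass to avoid retracing
            xs = xs[::-1]
        waypoints.extend((x, y) for x in xs)
    return np.array(waypoints)

path = boustrophedon_path(width=50.0, height=30.0, resolution=1.0)
length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
print(f"{len(path)} samples, path length of {length:.1f} (same units as inputs)")
```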
Model predictive control (MPC) is a popular strategy for controlling robots, but it is difficult to apply to systems that make contact because of the complex nature of hybrid dynamics. To implement MPC for systems with contact, dynamic models are often simplified or contact sequences fixed in time so that trajectories can be planned efficiently. In this work, we extend the hybrid iterative linear quadratic regulator to work in an MPC fashion (HiLQR MPC) by 1) modifying how the cost function is computed when contact modes do not align, 2) using parallel processing when simulating rigid-body dynamics, and 3) using efficient analytical derivative computations of the rigid-body dynamics. The result is a system that can modify the contact sequence of the reference behavior and plan cohesively, which is crucial when handling large disturbances. HiLQR MPC is tested on two systems: first, the hybrid cost modification is validated on a simple actuated bouncing-ball hybrid system. HiLQR MPC is then compared against methods that use a centroidal dynamics assumption on a quadruped robot (Unitree A1). HiLQR MPC outperforms the centroidal approach in both simulation and hardware tests.
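The receding-horizon structure itself is standard; the sketch below only shows where a hybrid iLQR solve would sit in such a loop (hilqr_solve, apply_control, estimate_state, and shift_plan are hypothetical placeholders for illustration, not APIs from the paper):

```python
# Schematic receding-horizon (MPC) loop. All helper names are hypothetical
# placeholders; the point is only the structure: re-solve from the current
# state estimate, apply the first control, shift the plan, and repeat.
def mpc_loop(x0, reference, horizon, dt, n_steps):
    x_est = x0
    warm_start = None
    for _ in range(n_steps):
        # Re-plan from the current state; warm-starting with the previous
        # (time-shifted) solution keeps each re-solve cheap enough to run
        # at the control rate.
        plan = hilqr_solve(x_est, reference, horizon, init=warm_start)
        u = plan.controls[0]               # apply only the first control input
        x_est = estimate_state(apply_control(x_est, u, dt))
        warm_start = shift_plan(plan, dt)  # shift the plan forward in time
    return x_est
```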
The maximal information coefficient (MIC) is a powerful statistic for identifying dependencies between variables. However, it may be applied to sensitive data, and releasing it could leak private information. As a solution, we propose algorithms to approximate MIC in a way that provides differential privacy. We show that a natural application of the classic Laplace mechanism yields insufficient accuracy. We therefore introduce the MICr statistic, a new MIC approximation that is more compatible with differential privacy. We prove that MICr is a consistent estimator of MIC, and we provide two differentially private versions of it. We perform experiments on a variety of real and synthetic datasets. The results show that the private MICr statistics significantly outperform the direct application of the Laplace mechanism. Moreover, experiments on real-world datasets show usable accuracy when the sample size is at least moderately large.
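For reference, the Laplace-mechanism baseline that the abstract finds insufficiently accurate works as sketched below (a generic illustration, not the paper's code): noise with scale sensitivity/epsilon is added to the statistic, where the sensitivity bounds how much a single record can change it.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` under epsilon-differential privacy via Laplace noise.

    `sensitivity` must be the global sensitivity of the statistic (an upper
    bound on how much adding or removing one record can change it); this
    sketch simply takes it as an input rather than deriving it for MIC.
    """
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privatize a dependence score in [0, 1] with a budget of epsilon = 1
private_score = laplace_mechanism(0.42, sensitivity=0.05, epsilon=1.0)
```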
In this paper, we present a method for updating a robot's state belief through contact with uncertain surfaces, and we apply this update in a Kalman filter for more accurate state estimation. By examining how guard-surface uncertainty affects the timing of each mode, we derive a guard saltation matrix, which maps perturbations before a hybrid event to perturbations after it while accounting for the additional variation in the resulting state. In addition, we propose using parameterized reset functions, which capture how unknown parameters affect the mapping of the state from one mode to the next, with Jacobians that account for the additional uncertainty in the resulting state. The accuracy of these mappings is demonstrated by simulating sampled distributions through uncertain transition events and comparing the resulting covariances. Finally, we integrate these additional terms into an uncertainty-aware Salted Kalman Filter and show a peak reduction of 24-60% in average estimation error across a variety of test conditions and systems.
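For context, the sketch below recalls the standard construction these updates build on (the paper's additional guard- and parameter-uncertainty terms are not reproduced here): the saltation matrix maps pre-event state perturbations to post-event ones, and the baseline Salted Kalman Filter pushes covariance through a hybrid event with it.

$$
\Xi \;=\; D_x R \;+\; \frac{\left(F^{+} - D_x R\,F^{-} - D_t R\right) D_x g}{D_t g + D_x g\, F^{-}},
\qquad
\Sigma^{+} \;\approx\; \Xi\,\Sigma^{-}\,\Xi^{\top},
$$

where $R$ is the reset map, $g$ the guard function, and $F^{-}$, $F^{+}$ the pre- and post-event vector fields evaluated at the event; the guard saltation matrix and the parameterized-reset Jacobians described above contribute additional terms to $\Sigma^{+}$ to capture surface and parameter uncertainty.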
Many controllers for legged robotic systems leverage open-loop or closed-loop control at discrete hybrid events to enhance stability. Such controllers appear in several well-studied behaviors, such as the Raibert stepping controller, paddle juggling, and swing-leg retraction. This work introduces hybrid event shaping (HES): a generalized method for analyzing and producing stable hybrid event controllers. HES leverages the saltation matrix, which gives a closed-form expression for the effect of hybrid events on stability. We also introduce shape parameters, higher-order terms that can be tuned completely independently of the system dynamics to promote stability. Optimization methods are used to produce values of these parameters that optimize a stability measure. Hybrid event shaping captures previously developed control methods while also producing new optimally stable trajectories without the need for continuous-domain feedback.
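As a hedged note on the stability measure (this is the standard treatment of periodic hybrid orbits; the paper's exact objective may differ), stability is typically assessed through the linearized return map over one period, a product of continuous-flow Jacobians $\Phi_i$ and saltation matrices $\Xi_i$; the orbit is locally exponentially stable when the non-trivial eigenvalues of this product lie strictly inside the unit circle:

$$
\Psi \;=\; \Xi_k\,\Phi_k \cdots \Xi_2\,\Phi_2\,\Xi_1\,\Phi_1, \qquad |\lambda_j(\Psi)| < 1 .
$$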
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For a SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves recent advancements of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
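As background for the convolution in the first sentence (a standard identity that Gaussian-smoothing gradient estimators build on; the ReSQue reweighting itself is specified in the paper), the gradient of the Gaussian-convolved objective has a query-based form via Stein's identity:

$$
\nabla (f * \gamma_\rho)(x) \;=\; \mathbb{E}_{z \sim \mathcal{N}(0,\,\rho^2 I_d)}\!\left[\frac{z}{\rho^2}\, f(x+z)\right],
$$

where $\gamma_\rho$ denotes the $\mathcal{N}(0,\rho^2 I_d)$ density, so unbiased estimates of the smoothed gradient can be formed from (stochastic) evaluations at Gaussian-perturbed query points.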
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .